Supplementary Materials: Rethinking Alignment in Video Super-Resolution Transformers
The proposed patch alignment method can also be applied to the recurrent VSR framework. Recurrent frameworks are widely used in VSR and have achieved state-of-the-art performance. By replacing the CNN backbone with a Transformer backbone, we can easily build a recurrent VSR Transformer. Alignment modules are likewise present in existing recurrent methods. The feature size is set to 100, and the number of attention heads is 4. The baseline is the original BasicVSR++ model, which uses flow-guided deformable convolution (FGDC) and a CNN backbone.
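To make the idea concrete, here is a minimal sketch of patch alignment in NumPy. It is an illustration under stated assumptions, not the paper's implementation: each patch of the supporting frame is shifted as a whole by the rounded mean optical flow inside it, rather than warping every pixel individually (which resamples and can destroy sub-pixel detail). The function name `patch_align`, the `(dy, dx)` flow convention, and the border clipping are assumptions made for this sketch.

```python
import numpy as np

def patch_align(support: np.ndarray, flow: np.ndarray, patch: int = 4) -> np.ndarray:
    """Align a supporting frame to the reference by moving whole patches.

    support: (H, W) frame to align.
    flow:    (H, W, 2) reference->support displacements, (dy, dx) order.
    patch:   patch size; H and W are assumed to be multiples of it.
    """
    h, w = support.shape
    out = np.zeros_like(support)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            # Round the mean flow over the patch to an integer offset,
            # so pixels inside the patch keep their relative positions.
            dy, dx = np.round(
                flow[y:y + patch, x:x + patch].mean(axis=(0, 1))
            ).astype(int)
            # Clip the source window to stay inside the frame.
            sy = int(np.clip(y + dy, 0, h - patch))
            sx = int(np.clip(x + dx, 0, w - patch))
            out[y:y + patch, x:x + patch] = support[sy:sy + patch, sx:sx + patch]
    return out
```

With zero flow the output equals the input, and a constant flow of one patch width simply relabels patch positions; the design choice here is that integer, patch-level shifts avoid the bilinear resampling a per-pixel warp would introduce.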